# Multi-task learning

| Model | License | Description | Tags | Author | Downloads | Likes |
|---|---|---|---|---|---|---|
| So100 Finetune Marker | Apache-2.0 | A robot policy model trained with the LeRobot framework for a specific task | Multimodal Fusion, Safetensors | Ziang-Li | 135 | 0 |
| AMRBART Large V2 | MIT | A pre-trained semantic parser that converts sentences into Abstract Meaning Representation (AMR) graphs; v2 is simpler, faster, and more powerful | Knowledge Graph, Transformers, English | xfbai | 49 | 2 |
| Kogptv3 Contextbasedv4 | – | Built on the transformers library; its specific functions and uses are not yet documented | Large Language Model, Transformers | KingKDB | 90 | 2 |
| Mtmme Merge Gemma 2 9B NuSLERP W0.7 0.3 | – | A Gemma-2-9B variant produced by SLERP-merging two differently weighted versions of the model | Large Language Model, Transformers | zelk12 | 16 | 2 |
| Alphatable 1.5B | – | Built on the transformers library; its specific purpose is not clearly stated | Large Language Model, Transformers | jan-hq | 72 | 2 |
| Migician | Apache-2.0 | Migician is the first multimodal large language model with free-form multi-image grounding, delivering precise localization in complex multi-image scenarios and outperforming 70B-scale models | Text-to-Image, Transformers, English | Michael4933 | 83 | 1 |
| Qg Double Rm75 Pour Water | – | A PyTorch-based motion-control model for robotics | Object Detection, Safetensors | ConnorJiang | 2 | 0 |
| Deberta V3 Base Daigenc Mgt1a | MIT | A binary classifier for detecting machine-generated text; took first place in the monolingual subtask of the COLING 2025 GenAI detection task | Text Classification, Transformers, English | OU-Advacheck | 396 | 9 |
| Act Aloha Test | – | A PyTorch-based model focused on robotic motion-control tasks | Multimodal Fusion | TrossenRoboticsCommunity | 7 | 0 |
| KAIROS Ast Fake Audio Detection Unsupervised | – | Built on the Transformers library; its specific purpose is not yet documented | Large Language Model, Transformers | 012shin | 41 | 3 |
| Serafim 100m Portuguese Pt Sentence Encoder | MIT | A Portuguese sentence encoder built on sentence-transformers that maps text into a 768-dimensional vector space; suited to semantic search and clustering | Text Embedding, Other | PORTULAN | 2,254 | 1 |
| TURNA GGUF | Other | A Turkish encoder-decoder language model focused on understanding and generation | Large Language Model, Transformers | helizac | 159 | 3 |
| Ptt5 V2 Base | Apache-2.0 | Part of the ptt5-v2 series of Portuguese pre-trained T5 models, continue-pretrained from Google's original checkpoints | Large Language Model, Transformers, Other | unicamp-dl | 1,197 | 2 |
| Ptt5 V2 Small | Apache-2.0 | A Portuguese-optimized T5 model, continue-pretrained from Google's original t5-small checkpoint | Large Language Model, Transformers, Other | unicamp-dl | 85 | 1 |
| Kosaul Sft V0.2 | – | Built on the transformers library; its specific purpose is not yet documented | Large Language Model, Transformers | ingeol | 21 | 1 |
| Decilm 7B Instagram Post Generation | – | Built on the transformers library; its specific purpose is not yet documented | Large Language Model, Transformers | jjgerbo | 16 | 1 |
| Cabraqwen7b | CC | A Portuguese-optimized model fine-tuned from Qwen1.5-7B-Chat; performs strongly on several Brazilian benchmarks | Large Language Model, Supports Multiple Languages | botbot-ai | 15 | 4 |
| Deepseek Rust 1.3b Lora | – | Built on the transformers library; its specific purpose is not yet documented | Large Language Model, Transformers | ysr | 27 | 3 |
| Deberta Small Long Nli | Apache-2.0 | DeBERTa-v3-small with context extended to 1680 tokens, fine-tuned on the tasksource collection; suited to long-text natural language inference | Large Language Model, Transformers, Supports Multiple Languages | tasksource | 40.85k | 42 |
| Kafkalm 13B German V0.1 | – | A German language model based on the Llama2 architecture, focused on German text generation | Large Language Model, Transformers, German | seedboxai | 16 | 6 |
| Kf Deberta Multitask | – | A Korean sentence-embedding model built on sentence-transformers that maps sentences and paragraphs into a 768-dimensional dense vector space; suited to clustering and semantic search | Text Embedding, Transformers, Korean | upskyy | 1,866 | 15 |
| Deberta V3 Xsmall Zeroshot V1.1 All 33 | MIT | A small, efficient zero-shot classifier fine-tuned from microsoft/deberta-v3-xsmall, aimed at edge devices and in-browser use | Text Classification, Transformers, English | MoritzLaurer | 96.01k | 4 |
| Sbert Large Mt Ru Retriever | MIT | Maps sentences and paragraphs into a 1024-dimensional vector space; suited to sentence similarity, clustering, and semantic search | Text Embedding, Transformers, Other | Den4ikAI | 139 | 2 |
| Fine Tune Chinese Sentiment | – | Documentation is too limited to summarize this model | Large Language Model, Transformers | DavidLanz | 111 | 7 |
| Mpnet Mnr V2 Fine Tuned | – | A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space; suited to clustering and semantic search | Text Embedding, Transformers | BlazingFringe | 94 | 2 |
| Deberta V3 Base Tasksource Nli | Apache-2.0 | DeBERTa-v3-base fine-tuned on 600+ tasks via multi-task learning; strong at zero-shot classification and natural language inference | Text Classification, Transformers, Supports Multiple Languages | sileod | 182.30k | 124 |
| T5 V1 1 Base Ko | Apache-2.0 | A T5 1.1 model trained on a Korean corpus, with tokenization optimized via BBPE and MeCab morphological analysis | Large Language Model, Korean | team-lucid | 18 | 3 |
| Ernie 3.0 Xbase Zh | – | ERNIE 3.0, Baidu's large-scale knowledge-enhanced pre-trained model for language understanding and generation (xbase Chinese variant) | Large Language Model, Transformers, Chinese | nghuyong | 14.27k | 20 |
| Ruleanalbert | Apache-2.0 | RuLeanALBERT, a memory-efficient masked language model pretrained for Russian | Large Language Model, Transformers, Other | yandex | 80 | 35 |
| Bartpho Word Base | MIT | BARTpho, a pre-trained sequence-to-sequence model for Vietnamese adopting BART's "base" architecture and pre-training scheme | Large Language Model, Transformers | vinai | 1,313 | 3 |
| Ernie 3.0 Nano Zh | – | The nano Chinese variant of Baidu's ERNIE 3.0 knowledge-enhanced pre-trained model, offering efficient language processing | Large Language Model, Transformers, Chinese | nghuyong | 261 | 25 |
| Ernie 3.0 Micro Zh | – | The micro Chinese variant of Baidu's ERNIE 3.0 knowledge-enhanced pre-trained model | Large Language Model, Transformers, Chinese | nghuyong | 340 | 2 |
| Ernie 3.0 Medium Zh | – | The medium Chinese variant of Baidu's ERNIE 3.0 knowledge-enhanced pre-trained model | Large Language Model, Transformers, Chinese | nghuyong | 484 | 6 |
| Sentece Embeddings BETO | – | A Spanish BERT model served through sentence-transformers that generates 768-dimensional representations of sentences and paragraphs | Text Embedding, Transformers | espejelomar | 75 | 1 |
| Kosimcse Bert Multitask | – | A high-performance Korean sentence-embedding model based on BERT and optimized with a multi-task learning strategy | Text Embedding, Transformers, Korean | BM-K | 827 | 8 |
| Kosimcse Roberta | – | A Korean sentence-embedding model based on RoBERTa, optimized via contrastive learning; suited to semantic-similarity tasks | Text Embedding, Transformers, Korean | BM-K | 10.35k | 18 |
| Kpfbert | – | A Korean pre-trained language model based on BERT, released by jinmang2 | Large Language Model, Transformers | jinmang2 | 29.46k | 4 |
| Chinese Roberta Wwm Ext | Apache-2.0 | An Apache-2.0-licensed model; its specific functionality is not yet documented | Large Language Model, Transformers | quincyqiang | 24 | 0 |
| Ko Sroberta Multitask | – | A Korean sentence-embedding model built on sentence-transformers that maps sentences and paragraphs into a 768-dimensional dense vector space; suited to clustering and semantic search | Text Embedding, Korean | jhgan | 162.23k | 115 |
| Nli Deberta V3 Large | Apache-2.0 | A natural language inference model based on DeBERTa-v3-large, trained on SNLI and MultiNLI to classify the relationship between sentence pairs | Text Classification, Transformers, English | cross-encoder | 203.73k | 31 |
© 2025 AIbase